Load libraries

library(brms)
Loading required package: Rcpp
Loading 'brms' package (version 2.16.1). Useful instructions
can be found by typing help('brms'). A more detailed introduction
to the package is available through vignette('brms_overview').

Attaching package: ‘brms’

The following object is masked from ‘package:lme4’:

    ngrps

The following object is masked from ‘package:stats’:

    ar
library(tidyr)

Attaching package: ‘tidyr’

The following objects are masked from ‘package:Matrix’:

    expand, pack, unpack
library(ggplot2)
library(ggridges)

Utility function

Inverse logit function for converting fitted model estimates (log-odds) into probabilities

logistic <- function(x) {
  p <- 1 / (1 + exp(-x))
  ifelse(x == Inf, 1, p)  # guard: map +Inf to exactly 1
}
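A quick sanity check of the utility function (a minimal illustration; the specific inputs are arbitrary):

```r
logistic <- function(x) { p <- 1/(1 + exp(-x)); ifelse(x == Inf, 1, p) }
logistic(0)     # 0.5: a log-odds of 0 is an even chance
logistic(0.66)  # ~0.66: a coefficient of this size maps to roughly a 66% probability
logistic(Inf)   # 1: the guard clause handles the infinite case
```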

Experiment 1

load("./updated-initial-mimicry-data.RData")
initial.mimicry.data <- initial.mimicry.data[which(!is.na(initial.mimicry.data$Correct)),]
bprior <- c(prior_string("normal(0,1)", class = "b"))
experiment1.fitb <- brm(Correct~(Try-1)+(1|Participant), prior=bprior, data=initial.mimicry.data, family="bernoulli") # this will take longer than glmer
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL '777279a8efb001aa059dd4fd0c635b9a' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.001179 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 11.79 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 7.47234 seconds (Warm-up)
Chain 1:                5.28273 seconds (Sampling)
Chain 1:                12.7551 seconds (Total)
Chain 1: 

(Sampler output for chains 2-4 omitted: same iteration schedule as chain 1; each chain finished in about 13 seconds.)
summary(experiment1.fitb) 
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ (Try - 1) + (1 | Participant) 
   Data: initial.mimicry.data (Number of observations: 8006) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 49) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.45      0.06     0.35     0.57 1.01      988     1098

Population-Level Effects: 
        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
TryTry1     0.66      0.07     0.51     0.81 1.00      946     1539
TryTry4     0.68      0.08     0.54     0.84 1.00      969     1355

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
experiment1.fit <- glmer(Correct~(Try-1)+(1|Participant), data=initial.mimicry.data, family="binomial") 
summary(experiment1.fit)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: binomial  ( logit )
Formula: Correct ~ (Try - 1) + (1 | Participant)
   Data: initial.mimicry.data

     AIC      BIC   logLik deviance df.resid 
 10060.5  10081.4  -5027.2  10054.5     8003 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-2.0121 -1.2003  0.6229  0.7151  1.3841 

Random effects:
 Groups      Name        Variance Std.Dev.
 Participant (Intercept) 0.1817   0.4263  
Number of obs: 8006, groups:  Participant, 49

Fixed effects:
         Estimate Std. Error z value Pr(>|z|)    
TryTry 1  0.66702    0.06902   9.665   <2e-16 ***
TryTry 4  0.69482    0.07209   9.638   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
         TryTr1
TryTry 4 0.746 
prior_summary(experiment1.fitb)
                prior class      coef       group resp dpar nlpar bound       source
          normal(0,1)     b                                                     user
          normal(0,1)     b   TryTry1                                   (vectorized)
          normal(0,1)     b   TryTry4                                   (vectorized)
 student_t(3, 0, 2.5)    sd                                                  default
 student_t(3, 0, 2.5)    sd           Participant                       (vectorized)
 student_t(3, 0, 2.5)    sd Intercept Participant                       (vectorized)
# Posterior predictive check

pp_check(experiment1.fitb)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

# Extract MCMC samples
experiment1.fitb.post <- posterior_samples(experiment1.fitb)
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
# Compute the 95% CI after converting to the probability scale
quantile(logistic(experiment1.fitb.post$b_TryTry1),c(0.025,0.975))
     2.5%     97.5% 
0.6255559 0.6910574 
quantile(logistic(experiment1.fitb.post$b_TryTry4),c(0.025,0.975))
     2.5%     97.5% 
0.6314956 0.6979727 
# Compute the CI of the *difference* between the two tries
quantile(logistic(experiment1.fitb.post$b_TryTry4)-logistic(experiment1.fitb.post$b_TryTry1),c(0.025,0.975))
       2.5%       97.5% 
-0.01633002  0.02878057 
hist(logistic(experiment1.fitb.post$b_TryTry4)-logistic(experiment1.fitb.post$b_TryTry1))
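The warning above notes that posterior_samples() is deprecated. A sketch of the recommended as_draws replacement (assuming the posterior package, a dependency of recent brms releases, is available); it yields the same per-draw coefficient columns used below:

```r
library(posterior)                      # assumed available as a brms dependency
draws <- as_draws_df(experiment1.fitb)  # one row per post-warmup draw
quantile(logistic(draws$b_TryTry1), c(0.025, 0.975))
```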

experiment1.fitb.post.long <- pivot_longer(experiment1.fitb.post, cols = c(b_TryTry1, b_TryTry4))
experiment1.fitb.post.long$value <- logistic(experiment1.fitb.post.long$value)


ggplot(experiment1.fitb.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c('One','Four')) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Try Number") +
  theme_ridges()
Picking joint bandwidth of 0.00291

Experiment 2

load("./mimicry-analyses.RData")

# Convert the Correct factor to numeric 0/1
data.amal.long$Correct <- as.numeric(data.amal.long$Correct)-1
data.amal.long <- data.amal.long[!is.na(data.amal.long$Correct),]

Model for overall probability of success

iprior <- c(prior_string("normal(0,5)", class = "Intercept"))
experiment2.fit1 <- brm(Correct~1+(1|Participant), prior=iprior, data=data.amal.long, family=bernoulli)
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL '2aea5ea2236cd741b07c87f3c8dce623' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 9.2e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.92 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2.00989 seconds (Warm-up)
Chain 1:                1.16075 seconds (Sampling)
Chain 1:                3.17064 seconds (Total)
Chain 1: 

(Sampler output for chains 2-4 omitted: same iteration schedule as chain 1; each chain finished in 3 to 5 seconds.)
Warning: There were 4 divergent transitions after warmup. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
experiment2.fit1.post <- posterior_samples(experiment2.fit1)
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
summary(experiment2.fit1)
Warning: There were 4 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ 1 + (1 | Participant) 
   Data: data.amal.long (Number of observations: 1252) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 147) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.20      0.11     0.02     0.43 1.00     1093     2041

Population-Level Effects: 
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     0.53      0.06     0.41     0.66 1.00     4208     2561

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
quantile(logistic(experiment2.fit1.post$b_Intercept),c(0.025,0.975))
     2.5%     97.5% 
0.6014421 0.6582900 
bprior <- c(prior_string("normal(0,5)", class = "b"))
experiment2.fit1b <- brm(Correct~(Raised.General-1)+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 9.9e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.99 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 1.84622 seconds (Warm-up)
Chain 1:                1.9597 seconds (Sampling)
Chain 1:                3.80592 seconds (Total)
Chain 1: 

(Sampler output for chains 2-4 omitted: same iteration schedule as chain 1; each chain finished in about 3 to 4 seconds.)
experiment2.fit1b.post <- posterior_samples(experiment2.fit1b)
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
summary(experiment2.fit1b)
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ (Raised.General - 1) + (1 | Participant) 
   Data: data.amal.long (Number of observations: 1252) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 147) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.21      0.12     0.01     0.44 1.00      950     1492

Population-Level Effects: 
                             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Raised.GeneralBritishIsles       0.48      0.09     0.31     0.65 1.00     4679     2702
Raised.GeneralNorthAmerica       0.46      0.13     0.20     0.70 1.00     4865     3011
Raised.GeneralRestoftheWorld     0.70      0.13     0.45     0.94 1.00     4794     2730

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
quantile(logistic(experiment2.fit1b.post$b_Raised.GeneralBritishIsles),c(0.025,0.975))
     2.5%     97.5% 
0.5772129 0.6576632 
quantile(logistic(experiment2.fit1b.post$b_Raised.GeneralNorthAmerica),c(0.025,0.975))
     2.5%     97.5% 
0.5506386 0.6678771 
quantile(logistic(experiment2.fit1b.post$b_Raised.GeneralRestoftheWorld),c(0.025,0.975))
     2.5%     97.5% 
0.6099336 0.7199879 
# Posterior predictive check

pp_check(experiment2.fit1b)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

Plotting experiment 2 fit 1b

experiment2.fit1b.post.long <- pivot_longer(experiment2.fit1b.post,cols=c(b_Raised.GeneralBritishIsles,b_Raised.GeneralNorthAmerica,b_Raised.GeneralRestoftheWorld))
experiment2.fit1b.post.long$value <- logistic(experiment2.fit1b.post.long$value)

ggplot(experiment2.fit1b.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c('British Isles','North America','Rest of the World')) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("General area raised") +
  theme_ridges()
Picking joint bandwidth of 0.00446

experiment2.fit2b <- brm(Correct~Mimicry.Score+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
Warning: Rows containing NAs were excluded from the model.
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL '3c49edce2ab6d1cc57670e5d096c75b2' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 6.6e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.66 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 1.67004 seconds (Warm-up)
Chain 1:                0.434662 seconds (Sampling)
Chain 1:                2.10471 seconds (Total)
Chain 1: 

(Sampler output for chains 2-4 omitted: same iteration schedule as chain 1; each chain finished in about 2 seconds.)
summary(experiment2.fit2b)
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ Mimicry.Score + (1 | Participant) 
   Data: data.amal.long (Number of observations: 516) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 49) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.31      0.17     0.02     0.67 1.00      952     1008

Population-Level Effects: 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept         0.78      0.57    -0.30     1.93 1.00     5106     2780
Mimicry.Score    -0.00      0.01    -0.02     0.01 1.00     5144     2920

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
# Extract MCMC samples for this model first (the extraction step was missing)
experiment2.fit2b.post <- posterior_samples(experiment2.fit2b)
quantile(logistic(experiment2.fit2b.post$b_Mimicry.Score),c(0.025,0.975))
     2.5%     97.5% 
0.4944419 0.5030246 
# Posterior predictive check

pp_check(experiment2.fit2b)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

experiment2.fit3.b <- brm(Correct~Listener.Speaker.Match+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
Compiling Stan program...
recompiling to avoid crashing R session
Start sampling

SAMPLING FOR MODEL '3c49edce2ab6d1cc57670e5d096c75b2' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 9.1e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.91 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 1.53677 seconds (Warm-up)
Chain 1:                1.90173 seconds (Sampling)
Chain 1:                3.4385 seconds (Total)
Chain 1: 

(Sampler output for chains 2-4 omitted: same iteration schedule as chain 1; each chain finished in about 3 to 4 seconds.)
experiment2.fit3.post <- posterior_samples(experiment2.fit3.b)
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
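As the warning notes, `posterior_samples()` is deprecated. A minimal sketch of the recommended replacement, `as_draws_df()` (re-exported by brms from the posterior package), assuming the fitted object `experiment2.fit3.b` from above; the draws carry the same column names (`b_Intercept`, `b_Listener.Speaker.Match`) as the population-level effects in the summary:

```r
# Recommended alternative to posterior_samples(): returns the post-warmup
# draws as a data frame with one column per parameter.
experiment2.fit3.draws <- as_draws_df(experiment2.fit3.b)
quantile(logistic(experiment2.fit3.draws$b_Intercept), prob = c(0.025, 0.975))
```

The intervals computed from `as_draws_df()` output should match those from `posterior_samples()`, since both expose the same underlying draws.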
quantile(logistic(experiment2.fit3.post$b_Intercept),prob=c(0.025,0.975)) # Intercept Posterior i.e. when Listener.Speaker.Match=0
     2.5%     97.5% 
0.6329866 0.7047944 
quantile(logistic(experiment2.fit3.post$b_Intercept+experiment2.fit3.post$b_Listener.Speaker.Match),prob=c(0.025,0.975)) # Posterior when b_Listener.Speaker.Match=1
     2.5%     97.5% 
0.5325187 0.6213355 
quantile(logistic(experiment2.fit3.post$b_Intercept) - logistic(experiment2.fit3.post$b_Intercept+experiment2.fit3.post$b_Listener.Speaker.Match),prob=c(0.025,0.975)) # Posterior difference in the probability of a correct answer
      2.5%      97.5% 
0.03444616 0.14776769 
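Beyond the 95% interval, the posterior mass on a positive difference can be read off directly from the draws. A sketch, assuming the `experiment2.fit3.post` data frame built above (`diff.p` is a name introduced here for illustration):

```r
# Posterior difference in P(correct) between the no-match and match
# conditions, and the share of draws on which that difference is positive.
diff.p <- logistic(experiment2.fit3.post$b_Intercept) -
  logistic(experiment2.fit3.post$b_Intercept +
             experiment2.fit3.post$b_Listener.Speaker.Match)
mean(diff.p > 0)
```

Since the 95% interval for the difference excludes zero, this proportion should be close to 1.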
summary(experiment2.fit3.b)
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ Listener.Speaker.Match + (1 | Participant) 
   Data: data.amal.long (Number of observations: 1252) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 147) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.21      0.12     0.01     0.46 1.01      902     1375

Population-Level Effects: 
                       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept                  0.70      0.08     0.55     0.87 1.00     4308     2590
Listener.Speaker.Match    -0.39      0.12    -0.63    -0.15 1.00     4870     2508

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
pp_check(experiment2.fit3.b)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

Plotting the model


experiment2.fit3.post$LSM <- experiment2.fit3.post$b_Intercept+experiment2.fit3.post$b_Listener.Speaker.Match

experiment2.fit3.post.long <- pivot_longer(experiment2.fit3.post,cols=c(b_Intercept,LSM))
experiment2.fit3.post.long$value <- logistic(experiment2.fit3.post.long$value)

ggplot(experiment2.fit3.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("No","Yes")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Listener-Speaker Match") +
  theme_ridges()
Picking joint bandwidth of 0.00349
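In ggplot2 3.4.0 and later, `stat()` inside `aes()` is deprecated in favour of `after_stat()`. A sketch of the same ridge plot with the updated syntax, assuming the `experiment2.fit3.post.long` data frame built above:

```r
# Same plot as above; after_stat() is the current ggplot2 idiom for
# referring to variables computed by the stat (here, the ECDF quantile band).
ggplot(experiment2.fit3.post.long,
       aes(x = value, y = name, fill = factor(after_stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("No", "Yes")) +
  scale_fill_manual(name = "Posterior Probability",
                    values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Listener-Speaker Match") +
  theme_ridges()
```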

experiment2.fit4b <- brm(Correct~Raised.Accent.Match+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
Compiling Stan program...
recompiling to avoid crashing R session
Start sampling

SAMPLING FOR MODEL '3c49edce2ab6d1cc57670e5d096c75b2' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000198 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.98 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 1.78323 seconds (Warm-up)
Chain 1:                1.98665 seconds (Sampling)
Chain 1:                3.76988 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL '3c49edce2ab6d1cc57670e5d096c75b2' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 9.2e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.92 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 1.69445 seconds (Warm-up)
Chain 2:                1.0396 seconds (Sampling)
Chain 2:                2.73405 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL '3c49edce2ab6d1cc57670e5d096c75b2' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 9.8e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.98 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 1.89521 seconds (Warm-up)
Chain 3:                1.05444 seconds (Sampling)
Chain 3:                2.94965 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL '3c49edce2ab6d1cc57670e5d096c75b2' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.00015 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 1.5 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 1.66437 seconds (Warm-up)
Chain 4:                1.01846 seconds (Sampling)
Chain 4:                2.68283 seconds (Total)
Chain 4: 
Warning: There were 2 divergent transitions after warmup. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
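A handful of divergences like this can often be eliminated by raising the sampler's step-size adaptation target, as the warning suggests. A sketch, assuming the fit above; the value 0.95 is a common first try, not part of the original analysis:

```r
# Re-run the same model with a higher adapt_delta target; smaller leapfrog
# steps make divergent transitions less likely, at the cost of slower sampling.
experiment2.fit4b <- update(experiment2.fit4b,
                            control = list(adapt_delta = 0.95))
```

If divergences persist after raising `adapt_delta`, the `pairs()` plot below is the place to look for problematic posterior geometry.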
experiment2.fit4.post <- posterior_samples(experiment2.fit4b)
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
quantile(logistic(experiment2.fit4.post$b_Intercept),prob=c(0.025,0.975)) # Intercept Posterior
     2.5%     97.5% 
0.6097672 0.6813327 
quantile(logistic(experiment2.fit4.post$b_Intercept+experiment2.fit4.post$b_Raised.Accent.Match),prob=c(0.025,0.975))  # Posterior when b_Raised.Accent.Match=1
     2.5%     97.5% 
0.5590427 0.6515391 
quantile(logistic(experiment2.fit4.post$b_Intercept) - logistic(experiment2.fit4.post$b_Intercept+experiment2.fit4.post$b_Raised.Accent.Match),prob=c(0.025,0.975)) # Posterior difference in the probability of a correct answer
       2.5%       97.5% 
-0.01802452  0.09636362 
# Evaluating divergences

pairs(experiment2.fit4b, las = 1)
Warning: The following arguments were unrecognized and ignored: las

summary(experiment2.fit4b)
Warning: There were 5 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ Raised.Accent.Match + (1 | Participant) 
   Data: data.amal.long (Number of observations: 1252) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 147) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.19      0.12     0.01     0.43 1.01      640     1666

Population-Level Effects: 
                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept               0.59      0.08     0.44     0.75 1.00     4282     2427
Raised.Accent.Match    -0.17      0.13    -0.42     0.08 1.00     4934     2588

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
pp_check(experiment2.fit4b)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

experiment2.fit5b <- brm(Correct~ (Raised.General-1) * Listener.Speaker.Match + (1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
Compiling Stan program...
recompiling to avoid crashing R session
Start sampling

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000101 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.01 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 1.99406 seconds (Warm-up)
Chain 1:                1.1332 seconds (Sampling)
Chain 1:                3.12726 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000112 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.12 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 2.11137 seconds (Warm-up)
Chain 2:                1.11253 seconds (Sampling)
Chain 2:                3.2239 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 8.2e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.82 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 2.08776 seconds (Warm-up)
Chain 3:                1.13353 seconds (Sampling)
Chain 3:                3.22129 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 7.9e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.79 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 1.77877 seconds (Warm-up)
Chain 4:                1.1301 seconds (Sampling)
Chain 4:                2.90886 seconds (Total)
Chain 4: 
summary(experiment2.fit5b)
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ (Raised.General - 1) * Listener.Speaker.Match + (1 | Participant) 
   Data: data.amal.long (Number of observations: 1252) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 147) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.21      0.12     0.01     0.45 1.01      676     1162

Population-Level Effects: 
                                                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Raised.GeneralBritishIsles                              1.01      0.17     0.67     1.35 1.00     1884     2443
Raised.GeneralNorthAmerica                              0.44      0.14     0.17     0.72 1.00     3460     2728
Raised.GeneralRestoftheWorld                            0.76      0.13     0.51     1.02 1.00     3603     2766
Listener.Speaker.Match                                 -0.71      0.20    -1.09    -0.32 1.00     1806     2269
Raised.GeneralNorthAmerica:Listener.Speaker.Match       0.80      0.39     0.07     1.59 1.00     2470     2954
Raised.GeneralRestoftheWorld:Listener.Speaker.Match     0.08      0.42    -0.72     0.92 1.00     2527     2411

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
experiment2.fit5.post <- posterior_samples(experiment2.fit5b)[,1:7]
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
pp_check(experiment2.fit5b)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

# British Isles, no Listener.Speaker.Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralBritishIsles),c(0.025,0.975))
     2.5%     97.5% 
0.6624766 0.7937186 
# British Isles, Listener.Speaker.Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralBritishIsles+experiment2.fit5.post$b_Listener.Speaker.Match),c(0.025,0.975))
     2.5%     97.5% 
0.5246164 0.6214881 
# North America, no Listener.Speaker.Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralNorthAmerica),c(0.025,0.975))
     2.5%     97.5% 
0.5423182 0.6735195 
# North America, Listener.Speaker.Match
quantile(logistic(experiment2.fit5.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralNorthAmerica:Listener.Speaker.Match`),c(0.025,0.975))
     2.5%     97.5% 
0.4921926 0.7560025 
# Rest of the World, no Listener.Speaker.Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralRestoftheWorld),c(0.025,0.975))
     2.5%     97.5% 
0.6249040 0.7349015 
# Rest of the World, Listener.Speaker.Match
quantile(logistic(experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld:Listener.Speaker.Match`),c(0.025,0.975))
     2.5%     97.5% 
0.3604592 0.7000265 

Plotting the model

experiment2.fit5.post$Raised.General.BI.LSM <- experiment2.fit5.post$b_Raised.GeneralBritishIsles+experiment2.fit5.post$b_Listener.Speaker.Match

experiment2.fit5.post$Raised.General.NA.LSM <- experiment2.fit5.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralNorthAmerica:Listener.Speaker.Match`

experiment2.fit5.post$Raised.General.RW.LSM <- experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld:Listener.Speaker.Match`


experiment2.fit5.post.long <- pivot_longer(experiment2.fit5.post,cols=c(b_Raised.GeneralBritishIsles,Raised.General.BI.LSM,b_Raised.GeneralNorthAmerica,Raised.General.NA.LSM,b_Raised.GeneralRestoftheWorld,Raised.General.RW.LSM))
experiment2.fit5.post.long$value <- logistic(experiment2.fit5.post.long$value)

ggplot(experiment2.fit5.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("British Isles, No","British Isles, Yes","North America, No","North America, Yes","Rest of the World, No","Rest of the World, Yes")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Region and Listener-Speaker Match") +
  theme_ridges()
Picking joint bandwidth of 0.00777


# There are no Rest of the World participants with Raised.Accent.Match == 1, so drop that level
data.amal.long2 <- subset(data.amal.long, Raised.General!='Rest of the World')
data.amal.long2$Raised.General <- as.character(data.amal.long2$Raised.General)
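Converting the factor to character is one way to shed the unused level; `droplevels()` keeps the variable a factor and is the more direct idiom. A sketch of the equivalent step, assuming `Raised.General` is a factor in `data.amal.long`:

```r
# Equivalent to the as.character() conversion above: drop the factor levels
# left empty by subset(), so brm() never sees the unused level.
data.amal.long2 <- subset(data.amal.long, Raised.General != 'Rest of the World')
data.amal.long2$Raised.General <- droplevels(data.amal.long2$Raised.General)
```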

experiment2.fit6b <- brm(Correct~ (Raised.General-1) * Raised.Accent.Match +(1|Participant), prior=bprior, data=data.amal.long2, family=bernoulli)
Compiling Stan program...
recompiling to avoid crashing R session
Start sampling

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 8.2e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.82 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 1.3818 seconds (Warm-up)
Chain 1:                0.904745 seconds (Sampling)
Chain 1:                2.28655 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 7e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.7 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 1.33585 seconds (Warm-up)
Chain 2:                0.878154 seconds (Sampling)
Chain 2:                2.21401 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 6.4e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.64 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 1.30053 seconds (Warm-up)
Chain 3:                1.59002 seconds (Sampling)
Chain 3:                2.89055 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 6.2e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.62 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 1.47945 seconds (Warm-up)
Chain 4:                1.02577 seconds (Sampling)
Chain 4:                2.50522 seconds (Total)
Chain 4: 
summary(experiment2.fit6b)
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ (Raised.General - 1) * Raised.Accent.Match + (1 | Participant) 
   Data: data.amal.long2 (Number of observations: 912) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 110) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.18      0.12     0.01     0.43 1.00      957     1644

Population-Level Effects: 
                                               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Raised.GeneralBritishIsles                         0.62      0.13     0.37     0.89 1.00     2445     2806
Raised.GeneralNorthAmerica                         0.38      0.16     0.06     0.70 1.00     4441     3140
Raised.Accent.Match                               -0.25      0.17    -0.59     0.09 1.00     2278     2791
Raised.GeneralNorthAmerica:Raised.Accent.Match     0.43      0.30    -0.15     1.02 1.00     2715     2747

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
experiment2.fit6.post <- posterior_samples(experiment2.fit6b)
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
pp_check(experiment2.fit6b)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

# British Isles, no Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$b_Raised.GeneralBritishIsles),c(0.025,0.975))
     2.5%     97.5% 
0.5922590 0.7088123 
# British Isles, Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$b_Raised.GeneralBritishIsles+experiment2.fit6.post$b_Raised.Accent.Match),c(0.025,0.975))
     2.5%     97.5% 
0.5369874 0.6455197 
# North America, no Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$b_Raised.GeneralNorthAmerica),c(0.025,0.975))
     2.5%     97.5% 
0.5157085 0.6681039 
# North America, Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit6.post$`b_Raised.Accent.Match`+experiment2.fit6.post$`b_Raised.GeneralNorthAmerica:Raised.Accent.Match`),c(0.025,0.975))
     2.5%     97.5% 
0.5490473 0.7185427 

Plotting the model

experiment2.fit6.post$Raised.General.BI.RAM <- experiment2.fit6.post$b_Raised.GeneralBritishIsles+experiment2.fit6.post$b_Raised.Accent.Match

experiment2.fit6.post$Raised.General.NA.RAM <- experiment2.fit6.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit6.post$`b_Raised.Accent.Match`+experiment2.fit6.post$`b_Raised.GeneralNorthAmerica:Raised.Accent.Match`


experiment2.fit6.post.long <- pivot_longer(experiment2.fit6.post,cols=c(b_Raised.GeneralBritishIsles,Raised.General.BI.RAM,b_Raised.GeneralNorthAmerica,Raised.General.NA.RAM))
experiment2.fit6.post.long$value <- logistic(experiment2.fit6.post.long$value)

ggplot(experiment2.fit6.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("British Isles, No","British Isles, Yes","North America, No","North America, Yes")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Region and Listener-Accent Match") +
  theme_ridges()
Picking joint bandwidth of 0.00596

# Evaluate detection in Experiment 2 by the quality of the fake, according to the scoring system from Experiment 1

experiment2.questions <- data.frame(Q=c(1:12), Fake.quality=c(100,80,60,"Not Fake","Not Fake",100,"Not Fake",87.5,80,"Not Fake","Not Fake","Not Fake"))
experiment2.questions$Fake.quality <- factor(experiment2.questions$Fake.quality, levels=c("Not Fake",60,80,87.5,100))

data.amal.long.temp$Fake.quality <- rep(experiment2.questions$Fake.quality, 147)

# Convert the factor to numeric 0/1 (factor levels map to 1/2, hence the - 1)
data.amal.long.temp$Correct <- as.numeric(data.amal.long.temp$Correct)-1
data.amal.long <- data.amal.long.temp[!is.na(data.amal.long.temp$Correct),]
experiment2.fit7 <- brm(Correct~(Fake.quality-1)+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
Compiling Stan program...
recompiling to avoid crashing R session
Start sampling

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 9.7e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.97 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 1.96279 seconds (Warm-up)
Chain 1:                1.52971 seconds (Sampling)
Chain 1:                3.4925 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000174 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.74 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 1.90653 seconds (Warm-up)
Chain 2:                1.80096 seconds (Sampling)
Chain 2:                3.70749 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 7.9e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.79 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 1.87842 seconds (Warm-up)
Chain 3:                1.13701 seconds (Sampling)
Chain 3:                3.01543 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'b87d5b571691a2d73453300cba0cc30a' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.000118 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 1.18 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: 
Chain 4:  Elapsed Time: 1.90253 seconds (Warm-up)
Chain 4:                1.31274 seconds (Sampling)
Chain 4:                3.21527 seconds (Total)
Chain 4: 
Warning: There were 1 divergent transitions after warmup. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems
experiment2.fit7.post <- posterior_samples(experiment2.fit7)
Warning: Method 'posterior_samples' is deprecated. Please see ?as_draws for recommended alternatives.
summary(experiment2.fit7)
Warning: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
 Family: bernoulli 
  Links: mu = logit 
Formula: Correct ~ (Fake.quality - 1) + (1 | Participant) 
   Data: data.amal.long (Number of observations: 1252) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~Participant (Number of levels: 147) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     0.23      0.12     0.02     0.47 1.01      756     1727

Population-Level Effects: 
                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Fake.qualityNotFake     0.45      0.09     0.28     0.62 1.00     4367     2744
Fake.quality60         -0.35      0.21    -0.78     0.05 1.00     4012     2648
Fake.quality80          0.93      0.15     0.63     1.23 1.00     3964     2775
Fake.quality87.5        1.14      0.25     0.68     1.64 1.00     4396     2639
Fake.quality100         0.60      0.14     0.33     0.88 1.00     4731     2361

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
pp_check(experiment2.fit7)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

#evaluating divergences

pairs(experiment2.fit7, las = 1)
Warning: The following arguments were unrecognized and ignored: las

#credible intervals

#Not Fake
quantile(logistic(experiment2.fit7.post$b_Fake.qualityNotFake),c(0.025,0.975))
     2.5%     97.5% 
0.5706674 0.6513117 
#60
quantile(logistic(experiment2.fit7.post$b_Fake.quality60),c(0.025,0.975))
     2.5%     97.5% 
0.3138667 0.5121018 
#80
quantile(logistic(experiment2.fit7.post$b_Fake.quality80),c(0.025,0.975))
     2.5%     97.5% 
0.6530918 0.7743257 
#87.5
quantile(logistic(experiment2.fit7.post$b_Fake.quality87.5),c(0.025,0.975))
     2.5%     97.5% 
0.6632454 0.8377182 
#100
quantile(logistic(experiment2.fit7.post$b_Fake.quality100),c(0.025,0.975))
     2.5%     97.5% 
0.5810652 0.7058803 
#plotting

experiment2.fit7.post.long <- pivot_longer(experiment2.fit7.post,cols=c(b_Fake.qualityNotFake,b_Fake.quality60,b_Fake.quality80,b_Fake.quality87.5,b_Fake.quality100))
experiment2.fit7.post.long$value <- logistic(experiment2.fit7.post.long$value)

ggplot(experiment2.fit7.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("Genuine speaker","60% correct","80% correct","87.5% correct","100% correct")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Quality of mimicry") +
  theme_ridges()
Picking joint bandwidth of 0.00614

---
title: "R scripts for reproducing the accents-as-signals study"
output: html_notebook
author: "Jonathan R Goodman, Enrico Crema"
---

# Load libraries
```{r}
library(lme4)   # needed for the glmer() comparison fit below
library(brms)
library(tidyr)
library(ggplot2)
library(ggridges)
```

# Utility function

Inverse logit function for converting fitted coefficients (log-odds) into binomial probabilities
```{r}
logistic <- function(x) {
  p <- 1/(1 + exp(-x))
  p <- ifelse(x == Inf, 1, p)  # explicit guard so x = Inf returns exactly 1
  p
}

```
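A quick sanity check of the function at a few reference points (0 maps to 0.5, and the `Inf` guard returns exactly 1):

```{r}
# Sanity check: the inverse logit of 0 is 0.5, and the explicit guard
# makes logistic(Inf) return exactly 1.
logistic(0)              # 0.5
logistic(Inf)            # 1
logistic(c(-2, 0, 2))    # monotone increasing, bounded in (0, 1)
```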


# Experiment 1

```{r}
load("./updated-initial-mimicry-data.RData")
initial.mimicry.data <- initial.mimicry.data[which(!is.na(initial.mimicry.data$Correct)),]
bprior <- c(prior_string("normal(0,1)", class = "b"))
experiment1.fitb <- brm(Correct~(Try-1)+(1|Participant), prior=bprior, data=initial.mimicry.data, family="bernoulli") # this will take longer than glmer
summary(experiment1.fitb) 
```
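The `Try - 1` term drops the global intercept, so the model estimates a separate log-odds for each level of `Try` (cell-means coding) rather than contrasts against a baseline. A toy `model.matrix` call on made-up data (`try.toy` is not from the study) shows the difference:

```{r}
# Toy factor, not the study data: with the intercept, the second level is a
# contrast against the first; without it, each level gets its own column.
try.toy <- factor(c("Try1", "Try1", "Try4", "Try4"))
model.matrix(~ try.toy)      # (Intercept) plus a Try4 contrast
model.matrix(~ try.toy - 1)  # one indicator column per level
```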


```{r}
experiment1.fit <- glmer(Correct~(Try-1)+(1|Participant), data=initial.mimicry.data, family="binomial") 
summary(experiment1.fit)
```

```{r}
prior_summary(experiment1.fitb)
```


```{r}
#posterior predictive check

pp_check(experiment1.fitb)
```

```{r}
# Extract MCMC samples
experiment1.fitb.post <- posterior_samples(experiment1.fitb)

# Compute 95% credible intervals after conversion into probabilities
quantile(logistic(experiment1.fitb.post$b_TryTry1),c(0.025,0.975))
quantile(logistic(experiment1.fitb.post$b_TryTry4),c(0.025,0.975))

# Compute the credible interval of the *difference* between the two tries
quantile(logistic(experiment1.fitb.post$b_TryTry4)-logistic(experiment1.fitb.post$b_TryTry1),c(0.025,0.975))
```
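Because the inverse logit is nonlinear, the difference has to be taken on the probability scale, as above, rather than by transforming the coefficient difference. A toy calculation with made-up coefficients illustrates the distinction:

```{r}
# Made-up coefficients, not posterior draws: transform each linear
# predictor first, then take the difference of the probabilities.
b1 <- 0.2; b4 <- 0.8
p1 <- 1 / (1 + exp(-b1))
p4 <- 1 / (1 + exp(-b4))
p4 - p1                    # difference in probability (the quantity above)
1 / (1 + exp(-(b4 - b1)))  # NOT a probability difference
```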


```{r}
hist(logistic(experiment1.fitb.post$b_TryTry4)-logistic(experiment1.fitb.post$b_TryTry1))
```


```{r}
experiment1.fitb.post.long <- pivot_longer(experiment1.fitb.post,cols=c(b_TryTry1,b_TryTry4))
experiment1.fitb.post.long$value <- logistic(experiment1.fitb.post.long$value)


ggplot(experiment1.fitb.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c('One','Four')) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Try Number") +
  theme_ridges()
```

# Experiment 2

```{r}
load("./mimicry-analyses.RData")

# convert the Correct factor to a numeric 0/1 outcome
data.amal.long$Correct <- as.numeric(data.amal.long$Correct)-1
data.amal.long <- data.amal.long[!is.na(data.amal.long$Correct),]
```
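The `- 1` works because coercing a factor returns its level indices, which start at 1; a two-level `Correct` factor therefore becomes 1/2, and subtracting 1 yields the 0/1 coding the bernoulli family expects. A toy two-level factor (not the study data):

```{r}
# Toy factor: as.numeric() returns level indices 1 and 2, so subtracting 1
# maps the factor onto a 0/1 numeric outcome.
x <- factor(c("Incorrect", "Correct", "Correct"), levels = c("Incorrect", "Correct"))
as.numeric(x)      # 1 2 2
as.numeric(x) - 1  # 0 1 1
```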

Model for overall probability of success

```{r}
iprior <- c(prior_string("normal(0,5)", class = "Intercept"))
experiment2.fit1 <- brm(Correct~1+(1|Participant), prior=iprior, data=data.amal.long, family=bernoulli)
experiment2.fit1.post <- posterior_samples(experiment2.fit1)

```

```{r}
summary(experiment2.fit1)
quantile(logistic(experiment2.fit1.post$b_Intercept),c(0.025,0.975))
```


```{r}
bprior <- c(prior_string("normal(0,5)", class = "b"))
experiment2.fit1b <- brm(Correct~(Raised.General-1)+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
experiment2.fit1b.post <- posterior_samples(experiment2.fit1b)

```

```{r}
summary(experiment2.fit1b)

quantile(logistic(experiment2.fit1b.post$b_Raised.GeneralBritishIsles),c(0.025,0.975))
quantile(logistic(experiment2.fit1b.post$b_Raised.GeneralNorthAmerica),c(0.025,0.975))
quantile(logistic(experiment2.fit1b.post$b_Raised.GeneralRestoftheWorld),c(0.025,0.975))
```

```{r}
#posterior predictive check

pp_check(experiment2.fit1b)
```

Plotting experiment 2 fit 1b

```{r}
experiment2.fit1b.post.long <- pivot_longer(experiment2.fit1b.post,cols=c(b_Raised.GeneralBritishIsles,b_Raised.GeneralNorthAmerica,b_Raised.GeneralRestoftheWorld))
experiment2.fit1b.post.long$value <- logistic(experiment2.fit1b.post.long$value)

ggplot(experiment2.fit1b.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c('British Isles','North America','Rest of the World')) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("General area raised") +
  theme_ridges()
```


```{r}
experiment2.fit2b <- brm(Correct~Mimicry.Score+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
experiment2.fit2b.post <- posterior_samples(experiment2.fit2b)
summary(experiment2.fit2b)
quantile(logistic(experiment2.fit2b.post$b_Mimicry.Score),c(0.025,0.975))
```

```{r}
#posterior predictive check

pp_check(experiment2.fit2b)
```


```{r}
experiment2.fit3.b <- brm(Correct~Listener.Speaker.Match+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
experiment2.fit3.post <- posterior_samples(experiment2.fit3.b)

quantile(logistic(experiment2.fit3.post$b_Intercept),prob=c(0.025,0.975)) # Intercept Posterior i.e. when Listener.Speaker.Match=0
quantile(logistic(experiment2.fit3.post$b_Intercept+experiment2.fit3.post$b_Listener.Speaker.Match),prob=c(0.025,0.975)) # Posterior when b_Listener.Speaker.Match=1
quantile(logistic(experiment2.fit3.post$b_Intercept) - logistic(experiment2.fit3.post$b_Intercept+experiment2.fit3.post$b_Listener.Speaker.Match),prob=c(0.025,0.975)) #Posterior difference in probability of correct answer
```


```{r}
summary(experiment2.fit3.b)
```

```{r}
pp_check(experiment2.fit3.b)
```


Plotting the model
```{r}

experiment2.fit3.post$LSM <- experiment2.fit3.post$b_Intercept+experiment2.fit3.post$b_Listener.Speaker.Match

experiment2.fit3.post.long <- pivot_longer(experiment2.fit3.post,cols=c(b_Intercept,LSM))
experiment2.fit3.post.long$value <- logistic(experiment2.fit3.post.long$value)

ggplot(experiment2.fit3.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("No","Yes")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Listener-Speaker Match") +
  theme_ridges()
```


```{r}
experiment2.fit4b <- brm(Correct~Raised.Accent.Match+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
experiment2.fit4.post <- posterior_samples(experiment2.fit4b)

quantile(logistic(experiment2.fit4.post$b_Intercept),prob=c(0.025,0.975)) # Intercept Posterior
quantile(logistic(experiment2.fit4.post$b_Intercept+experiment2.fit4.post$b_Raised.Accent.Match),prob=c(0.025,0.975))  # Posterior when b_Raised.Accent.Match=1
quantile(logistic(experiment2.fit4.post$b_Intercept) - logistic(experiment2.fit4.post$b_Intercept+experiment2.fit4.post$b_Raised.Accent.Match),prob=c(0.025,0.975)) #Posterior difference in probability of correct answer
```

```{r}
#evaluating divergences

pairs(experiment2.fit4b, las = 1)
```


```{r}
summary(experiment2.fit4b)
pp_check(experiment2.fit4b)
```



```{r}
experiment2.fit5b <- brm(Correct~ (Raised.General-1) * Listener.Speaker.Match + (1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
summary(experiment2.fit5b)

experiment2.fit5.post <- posterior_samples(experiment2.fit5b)[,1:7]

```

```{r}
pp_check(experiment2.fit5b)
```


```{r}
#British Isle, No Listener Speaker Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralBritishIsles),c(0.025,0.975))

#British Isle, Listener Speaker Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralBritishIsles+experiment2.fit5.post$b_Listener.Speaker.Match),c(0.025,0.975))

#NorthAmerica, No Listener Speaker Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralNorthAmerica),c(0.025,0.975))

#NorthAmerica, Listener Speaker Match
quantile(logistic(experiment2.fit5.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralNorthAmerica:Listener.Speaker.Match`),c(0.025,0.975))

#RestoftheWorld, No Listener Speaker Match
quantile(logistic(experiment2.fit5.post$b_Raised.GeneralRestoftheWorld),c(0.025,0.975))

#RestoftheWorld, Listener Speaker Match
quantile(logistic(experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld:Listener.Speaker.Match`),c(0.025,0.975))

```
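The sum-then-transform pattern above repeats for every cell, so it could be collapsed into a small helper. `cell_ci` below is a hypothetical convenience function (a sketch, not part of brms) that row-sums the named draw columns and returns the 95% interval on the probability scale:

```{r}
# Hypothetical helper, not from brms: 95% interval of the inverse logit of
# a row-wise sum of posterior draw columns.
cell_ci <- function(post, cols) {
  lp <- rowSums(post[, cols, drop = FALSE])
  quantile(1 / (1 + exp(-lp)), c(0.025, 0.975))
}
# e.g. the "North America, match" cell above would be
# cell_ci(experiment2.fit5.post,
#         c("b_Raised.GeneralNorthAmerica", "b_Listener.Speaker.Match",
#           "b_Raised.GeneralNorthAmerica:Listener.Speaker.Match"))
```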

Plotting the model

```{r}
experiment2.fit5.post$Raised.General.BI.LSM <- experiment2.fit5.post$b_Raised.GeneralBritishIsles+experiment2.fit5.post$b_Listener.Speaker.Match

experiment2.fit5.post$Raised.General.NA.LSM <- experiment2.fit5.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralNorthAmerica:Listener.Speaker.Match`

experiment2.fit5.post$Raised.General.RW.LSM <- experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld`+experiment2.fit5.post$`b_Listener.Speaker.Match`+experiment2.fit5.post$`b_Raised.GeneralRestoftheWorld:Listener.Speaker.Match`


experiment2.fit5.post.long <- pivot_longer(experiment2.fit5.post,cols=c(b_Raised.GeneralBritishIsles,Raised.General.BI.LSM,b_Raised.GeneralNorthAmerica,Raised.General.NA.LSM,b_Raised.GeneralRestoftheWorld,Raised.General.RW.LSM))
experiment2.fit5.post.long$value <- logistic(experiment2.fit5.post.long$value)

ggplot(experiment2.fit5.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("British Isles, No","British Isles, Yes","North America, No","North America, Yes","Rest of the World, No","Rest of the World, Yes")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Region and Listener-Speaker Match") +
  theme_ridges()
```



```{r}

# there are no Rest of the World participants with Raised.Accent.Match == 1, so drop that level
data.amal.long2 <- subset(data.amal.long, Raised.General!='Rest of the World')
data.amal.long2$Raised.General <- as.character(data.amal.long2$Raised.General)

experiment2.fit6b <- brm(Correct~ (Raised.General-1) * Raised.Accent.Match +(1|Participant), prior=bprior, data=data.amal.long2, family=bernoulli)
summary(experiment2.fit6b)
experiment2.fit6.post <- posterior_samples(experiment2.fit6b)

```

```{r}
pp_check(experiment2.fit6b)
```


```{r}
#British Isle, No Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$b_Raised.GeneralBritishIsles),c(0.025,0.975))

#British Isle, Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$b_Raised.GeneralBritishIsles+experiment2.fit6.post$b_Raised.Accent.Match),c(0.025,0.975))

#NorthAmerica, No Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$b_Raised.GeneralNorthAmerica),c(0.025,0.975))

#NorthAmerica, Raised.Accent.Match
quantile(logistic(experiment2.fit6.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit6.post$`b_Raised.Accent.Match`+experiment2.fit6.post$`b_Raised.GeneralNorthAmerica:Raised.Accent.Match`),c(0.025,0.975))

```

Plotting the model

```{r}
experiment2.fit6.post$Raised.General.BI.RAM <- experiment2.fit6.post$b_Raised.GeneralBritishIsles+experiment2.fit6.post$b_Raised.Accent.Match

experiment2.fit6.post$Raised.General.NA.RAM <- experiment2.fit6.post$`b_Raised.GeneralNorthAmerica`+experiment2.fit6.post$`b_Raised.Accent.Match`+experiment2.fit6.post$`b_Raised.GeneralNorthAmerica:Raised.Accent.Match`


experiment2.fit6.post.long <- pivot_longer(experiment2.fit6.post,cols=c(b_Raised.GeneralBritishIsles,Raised.General.BI.RAM,b_Raised.GeneralNorthAmerica,Raised.General.NA.RAM))
experiment2.fit6.post.long$value <- logistic(experiment2.fit6.post.long$value)

ggplot(experiment2.fit6.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("British Isles, No","British Isles, Yes","North America, No","North America, Yes")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Region and Listener-Accent Match") +
  theme_ridges()
```
```{r}
# Evaluate detection in experiment 2 by the quality of the fake, scored according to the system from experiment 1

experiment2.questions <- data.frame(Q=c(1:12), Fake.quality=c(100,80,60,"Not Fake","Not Fake",100,"Not Fake",87.5,80,"Not Fake","Not Fake","Not Fake"))
experiment2.questions$Fake.quality <- factor(experiment2.questions$Fake.quality, levels=c("Not Fake",60,80,87.5,100))

# attach the quality labels; assumes the long data hold the 12 questions in order for all 147 participants
data.amal.long.temp$Fake.quality <- rep(experiment2.questions$Fake.quality, 147)

# convert the Correct factor to a numeric 0/1 outcome
data.amal.long.temp$Correct <- as.numeric(data.amal.long.temp$Correct)-1
data.amal.long <- data.amal.long.temp[!is.na(data.amal.long.temp$Correct),]

```
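The `rep(..., 147)` call tiles the 12 question-level labels across participants, so it is only correct if every participant's rows appear in the same question order. A toy version with 3 questions and 2 participants shows the recycling:

```{r}
# Toy sizes, not the study data: rep() repeats the per-question labels once
# per participant, so row order must be question-within-participant.
q.labels <- factor(c("Q1", "Q2", "Q3"))
rep(q.labels, 2)                   # Q1 Q2 Q3 Q1 Q2 Q3
length(rep(q.labels, 2)) == 3 * 2  # mirrors the 12 x 147 alignment above
```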

```{r}
experiment2.fit7 <- brm(Correct~(Fake.quality-1)+(1|Participant), prior=bprior, data=data.amal.long, family=bernoulli)
experiment2.fit7.post <- posterior_samples(experiment2.fit7)
summary(experiment2.fit7)
```

```{r}
pp_check(experiment2.fit7)
```


```{r}
#evaluating divergences

pairs(experiment2.fit7, las = 1)
```

```{r}
#credible intervals

#Not Fake
quantile(logistic(experiment2.fit7.post$b_Fake.qualityNotFake),c(0.025,0.975))

#60
quantile(logistic(experiment2.fit7.post$b_Fake.quality60),c(0.025,0.975))

#80
quantile(logistic(experiment2.fit7.post$b_Fake.quality80),c(0.025,0.975))

#87.5
quantile(logistic(experiment2.fit7.post$b_Fake.quality87.5),c(0.025,0.975))

#100
quantile(logistic(experiment2.fit7.post$b_Fake.quality100),c(0.025,0.975))

```

```{r}
#plotting

experiment2.fit7.post.long <- pivot_longer(experiment2.fit7.post,cols=c(b_Fake.qualityNotFake,b_Fake.quality60,b_Fake.quality80,b_Fake.quality87.5,b_Fake.quality100))
experiment2.fit7.post.long$value <- logistic(experiment2.fit7.post.long$value)

ggplot(experiment2.fit7.post.long, aes(x = value, y = name, fill = factor(stat(quantile)))) +
  stat_density_ridges(
    geom = "density_ridges_gradient",
    calc_ecdf = TRUE,
    quantiles = c(0.025, 0.975),
    show.legend = FALSE,
    scale = 2,
    alpha = 0.7
  ) +
  scale_y_discrete(labels = c("Genuine speaker","60% correct","80% correct","87.5% correct","100% correct")) +
  scale_fill_manual(name = "Posterior Probability", values = c("lightgrey", "lightblue", "lightgrey")) +
  xlab("Probability") + ylab("Quality of mimicry") +
  theme_ridges()
```

